
    Functional Liftings of Vectorial Variational Problems with Laplacian Regularization

    We propose a functional lifting-based convex relaxation of variational problems with Laplacian-based second-order regularization. The approach rests on ideas from the calibration method as well as from sublabel-accurate continuous multilabeling approaches, and makes these approaches amenable to variational problems with vectorial data and higher-order regularization, as is common in image processing applications. We motivate the approach in the function space setting and prove that, in the special case of absolute Laplacian regularization, it encompasses the discretization-first sublabel-accurate continuous multilabeling approach as a special case. We present a mathematical connection between the lifted and original functionals and discuss possible interpretations of minimizers in the lifted function space. Finally, we apply the proposed approach to 2D image registration problems as an example. Comment: 12 pages, 3 figures; accepted at the conference "Scale Space and Variational Methods" in Hofgeismar, Germany, 2019.
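    As a toy illustration of the sublabel-accurate representation underlying such lifting approaches, a scalar value between two labels can be encoded as a convex combination of the neighbouring label indicators. This is our own one-dimensional simplification with hypothetical function names, not the paper's vectorial construction:

```python
import numpy as np

def lift(v, labels):
    """Encode a scalar v lying between labels[i] and labels[i+1] as a
    convex combination of the two neighbouring label indicators."""
    labels = np.asarray(labels, dtype=float)
    i = int(np.searchsorted(labels, v, side="right")) - 1
    i = min(max(i, 0), len(labels) - 2)      # clamp to a valid interval
    t = (v - labels[i]) / (labels[i + 1] - labels[i])
    u = np.zeros(len(labels))
    u[i], u[i + 1] = 1.0 - t, t
    return u

def unlift(u, labels):
    # Recover the value as the label-weighted mean; exact for the
    # two-nonzero-entry representation produced by lift().
    return float(np.dot(u, np.asarray(labels, dtype=float)))
```

The point of the sublabel-accurate encoding is that values strictly between labels are represented exactly, rather than rounded to the nearest label.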

    AMS radiocarbon dating of large za baobabs (Adansonia za) of Madagascar

    © The Author(s), 2016. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in PLoS One 11 (2016): e0146977, doi:10.1371/journal.pone.0146977. The article reports the radiocarbon investigation of Anzapalivoro, the largest za baobab (Adansonia za) specimen of Madagascar, and of another za, namely the Big cistern baobab. Several wood samples collected from the large inner cavity and from the outer part/exterior of the tree were investigated by AMS (accelerator mass spectrometry) radiocarbon dating. For samples collected from the cavity walls, the age values increase with the distance into the wood up to a point of maximum age, after which the values decrease toward the outer part. This anomaly of the age sequence indicates that the inner cavity of Anzapalivoro is a false cavity, practically an empty space between several fused stems disposed in a ring-shaped structure. The radiocarbon date of the oldest sample was 780 ± 30 BP, which corresponds to a calibrated age of around 735 yr. Dating results indicate that Anzapalivoro has a closed ring-shaped structure, which consists of 5 fused stems that close a false cavity. The oldest part of the biggest za baobab has a calculated age of 900 years. We also disclose results of the investigation of a second za baobab, the Big cistern baobab, which was hollowed out for water storage. This specimen, which consists of 4 fused stems, was found to be around 260 years old.
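    Conventional radiocarbon ages such as the 780 ± 30 BP quoted above follow from the measured fraction of modern carbon via the Libby mean life (8033 yr, by convention); a minimal sketch. Note that converting a BP age to a calendar age (the ~735 yr figure) additionally requires a calibration curve such as IntCal/SHCal, which is not shown here:

```python
import math

LIBBY_MEAN_LIFE = 8033.0  # years: Libby half-life 5568 yr / ln 2, by convention

def conventional_age_bp(f14c):
    """Conventional radiocarbon age (yr BP) from the measured
    fraction of modern carbon F14C."""
    return -LIBBY_MEAN_LIFE * math.log(f14c)

def f14c_from_age(age_bp):
    """Inverse: fraction of modern carbon for a given conventional age."""
    return math.exp(-age_bp / LIBBY_MEAN_LIFE)
```

For example, an age of 780 BP corresponds to a measured F14C of about 0.907.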

    PCA-based lung motion model

    Organ motion induced by respiration may cause clinically significant targeting errors and greatly degrade the effectiveness of conformal radiotherapy. It is therefore crucial to be able to model respiratory motion accurately. A recently proposed lung motion model based on principal component analysis (PCA) has been shown to be promising on a few patients. However, there is still a need to understand the underlying reason why it works. In this paper, we present a much deeper and more detailed analysis of the PCA-based lung motion model. We provide a theoretical justification of the effectiveness of PCA in modeling lung motion. We also prove that, under certain conditions, the PCA motion model is equivalent to the 5D motion model, which is based on the physiology and anatomy of the lung. The modeling power of the PCA model was tested on clinical data, and the average 3D error was found to be below 1 mm. Comment: 4 pages, 1 figure. Submitted to the International Conference on the Use of Computers in Radiation Therapy, 2010.
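    In outline, such a PCA motion model keeps the mean displacement field plus a few principal components and describes any breathing state by a handful of scalar coefficients. A minimal numpy sketch, with random data standing in for the registration-derived displacement fields (the shapes and function names are illustrative only):

```python
import numpy as np

# Each row of X is one flattened displacement vector field
# (one respiratory phase); random data stands in for real fields.
rng = np.random.default_rng(0)
n_phases, n_dofs = 10, 300
X = rng.standard_normal((n_phases, n_dofs))

# PCA via SVD of the mean-centered data matrix.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

k = 2  # low-dimensional model: mean field + k principal components

def synthesize(coeffs):
    """Displacement field generated from k scalar coefficients."""
    return mean + coeffs @ Vt[:k]

# Project an observed phase onto the model, then reconstruct it.
coeffs = (X[0] - mean) @ Vt[:k].T
recon = synthesize(coeffs)
```

The rows of `Vt` are the principal motion modes; keeping only the first few captures the dominant, approximately low-dimensional structure of respiratory motion.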

    Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    The primal-dual optimization algorithm developed by Chambolle and Pock (CP) in 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. Comment: Resubmitted to Physics in Medicine and Biology. The text has been modified according to referee comments, and typos in the equations have been corrected.
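    A minimal sketch of one CP instance, here for the classic ROF/TV-denoising problem min_u ||∇u||_1 + (λ/2)||u − f||², using the standard step-size convention τσL² ≤ 1 with L² = 8 for the discrete gradient. This is an illustrative instance, not one of the CT problems derived in the article:

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann (zero-flux) boundary.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad (so <grad u, p> = -<u, div p>).
    d = np.zeros_like(px)
    d[:-1, :] += px[:-1, :]; d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]; d[:, 1:] -= py[:, :-1]
    return d

def rof_cp(f, lam=4.0, n_iter=200):
    """Chambolle-Pock iteration for min_u ||grad u||_1 + (lam/2)||u-f||^2."""
    tau = sigma = 1.0 / np.sqrt(8.0)    # tau * sigma * L^2 = 1, L^2 = 8
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        # Dual ascent step + projection onto the unit ball |p| <= 1.
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px, py = px / norm, py / norm
        # Primal proximal step for the quadratic data term.
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2.0 * u - u_old             # over-relaxation
    return u
```

Swapping the data term and the linear operator is exactly the "prototyping" the article advertises: only the proximal step and the operator/adjoint pair change from problem to problem.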

    Parallelizing the Chambolle Algorithm for Performance-Optimized Mapping on FPGA Devices

    The performance and the efficiency of recent computing platforms have been deeply influenced by the widespread adoption of hardware accelerators, such as Graphics Processing Units (GPUs) or Field Programmable Gate Arrays (FPGAs), which are often employed to support the tasks of General Purpose Processors (GPPs). One of the main advantages of these accelerators over their sequential counterparts (GPPs) is their ability to perform massive parallel computation. However, in order to exploit this competitive edge, it is necessary to extract the parallelism from the target algorithm to be executed, which is in general a very challenging task. This concept is demonstrated, for instance, by the poor performance achieved on relevant multimedia algorithms, such as Chambolle, a well-known algorithm employed for optical flow estimation. The implementations of this algorithm that can be found in the state of the art are generally based on GPUs, but barely improve the performance that can be obtained with a powerful GPP. In this paper, we propose a novel approach to extract the parallelism from computation-intensive multimedia algorithms, which includes an analysis of their dependency schema and an assessment of their data reuse. We then perform a thorough analysis of the Chambolle algorithm, providing a formal proof of its inner data dependencies and locality properties. Next, we exploit the considerations drawn from this analysis by proposing an architectural template that takes advantage of the fine-grained parallelism of FPGA devices. Moreover, since the proposed template can be instantiated with different parameters, we also propose a design metric, the expansion rate, to help the designer in the estimation of the efficiency and performance of the different instances, making it possible to select the right one before the implementation phase. We finally show, by means of experimental results, how the proposed analysis and parallelization approach leads to the design of efficient and high-performance FPGA-based implementations that are orders of magnitude faster than the state-of-the-art ones.
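    For reference, the computational core being parallelized is Chambolle's (2004) dual fixed-point iteration for TV denoising, a per-pixel stencil computation. A minimal numpy sketch of that iteration (the article's dependency analysis and FPGA template go well beyond this reference version):

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Negative adjoint of grad.
    d = np.zeros_like(px)
    d[:-1, :] += px[:-1, :]; d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]; d[:, 1:] -= py[:, :-1]
    return d

def chambolle_tv(f, lam=0.25, tau=0.125, n_iter=100):
    """Chambolle's dual iteration:
    p <- (p + tau * grad(div p - f/lam)) / (1 + tau * |grad(div p - f/lam)|),
    with tau <= 1/8 for convergence; returns the denoised image f - lam*div p."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        mag = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * mag)
        py = (py + tau * gy) / (1.0 + tau * mag)
    return f - lam * div(px, py)
```

Each update of `px[i, j]` and `py[i, j]` reads only a small fixed neighbourhood of the dual field, which is the locality property that makes a fine-grained hardware mapping attractive.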

    A combined first and second order variational approach for image reconstruction

    In this paper we study a variational problem in the space of functions of bounded Hessian. Our model constitutes a straightforward higher-order extension of the well-known ROF functional (total variation minimisation), to which we add a non-smooth second-order regulariser. It combines convex functions of the total variation and the total variation of the first derivatives. In what follows, we prove existence and uniqueness of minimisers of the combined model and present the numerical solution of the corresponding discretised problem by employing the split Bregman method. The paper is furnished with applications of our model to image denoising, deblurring as well as image inpainting. The obtained numerical results are compared with results obtained from total generalised variation (TGV), infimal convolution and Euler's elastica, three other state-of-the-art higher-order models. The numerical discussion confirms that the proposed higher-order model competes with models of its kind in avoiding the creation of undesirable artifacts and blocky-like structures in the reconstructed images -- a known disadvantage of the ROF model -- while being simple and efficiently numerically solvable. Comment: 34 pages, 89 figures.
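    A minimal discrete sketch of the combined objective: first-order total variation of the image plus the total variation of its first derivatives (the bounded-Hessian term). Anisotropic norms are used for brevity, and this is only the energy evaluation, not the paper's split Bregman minimisation:

```python
import numpy as np

def tv1_tv2_energy(u, alpha=1.0, beta=1.0):
    """Discrete combined first/second-order energy:
    alpha * TV(u) + beta * TV(grad u), with anisotropic (L1) norms."""
    # First-order term: sum of absolute forward differences.
    ux = np.diff(u, axis=0); uy = np.diff(u, axis=1)
    tv1 = np.abs(ux).sum() + np.abs(uy).sum()
    # Second-order term: absolute second differences (discrete Hessian).
    uxx = np.diff(u, n=2, axis=0)
    uyy = np.diff(u, n=2, axis=1)
    uxy = np.diff(np.diff(u, axis=0), axis=1)
    tv2 = np.abs(uxx).sum() + 2.0 * np.abs(uxy).sum() + np.abs(uyy).sum()
    return alpha * tv1 + beta * tv2
```

The second-order term vanishes on affine images, which is why such models avoid the staircasing (blocky-structure) artifacts of pure ROF: smooth ramps are not penalised by the Hessian term.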

    The growth stop phenomenon of baobabs (Adansonia spp.) identified by radiocarbon dating

    The article reports the growth stop phenomenon, which has been documented only for baobabs, i.e. for trees belonging to the Adansonia genus. The identification of growth stop was enabled by radiocarbon dating, which allows a complex investigation of samples collected from the trunk/stems of baobabs. In several cases, the outermost rings of baobabs, which were close to the bark, were found to be old, with ages of several hundred years, instead of being very young. Dating results of samples collected from six baobabs are presented. For multistemmed baobabs, the growth stop may occur only for one or several stems. We identified four factors that may induce the growth stop: (i) stress determined by severe climate conditions, (ii) old age, (iii) the need to keep a stable internal architecture, and (iv) the collapse of stems, for those stems that survive this trauma. Baobabs and their stems affected by growth stop may survive for several centuries by continuing to produce leaves, flowers, and fruits. This phenomenon is associated with the capacity of baobabs to store large amounts of water in their trunks/stems in the rainy season. This reservoir of water is used during the dry season and allows the trees to survive prolonged drought periods. Selected papers from the 2015 Radiocarbon Conference, Dakar, Senegal, 16–20 November 2015. https://www.cambridge.org/core/journals/radiocarbon

    3D Fluid Flow Estimation with Integrated Particle Reconstruction

    The standard approach to densely reconstructing the motion in a volume of fluid is to inject high-contrast tracer particles and record their motion with multiple high-speed cameras. Almost all existing work processes the acquired multi-view video in two separate steps, utilizing either a pure Eulerian or a pure Lagrangian approach. Eulerian methods perform a voxel-based reconstruction of particles per time step, followed by 3D motion estimation, with some form of dense matching between the precomputed voxel grids from different time steps. In this sequential procedure, the first step cannot use temporal consistency considerations to support the reconstruction, while the second step has no access to the original, high-resolution image data. Alternatively, Lagrangian methods reconstruct an explicit, sparse set of particles and track the individual particles over time. Physical constraints can only be incorporated in a post-processing step, when interpolating the particle tracks to a dense motion field. We show, for the first time, how to jointly reconstruct both the individual tracer particles and a dense 3D fluid motion field from the image data, using an integrated energy minimization. Our hybrid Lagrangian/Eulerian model reconstructs individual particles and at the same time recovers a dense 3D motion field in the entire domain. Making particles explicit greatly reduces the memory consumption and allows one to use the high-resolution input images for matching, whereas the dense motion field makes it possible to include physical a-priori constraints and account for the incompressibility and viscosity of the fluid. The method exhibits greatly (~70%) improved results over our recently published baseline with two separate steps for 3D reconstruction and motion estimation. Our results with only two time steps are comparable to those of state-of-the-art tracking-based methods that require much longer sequences. Comment: To appear in the International Journal of Computer Vision (IJCV).
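    The incompressibility prior mentioned above amounts to penalizing the divergence of the recovered velocity field on the voxel grid. A minimal sketch of the discrete quantity involved (central differences via numpy; this is an illustration, not the paper's energy formulation):

```python
import numpy as np

def divergence(vx, vy, vz, h=1.0):
    """Central-difference divergence of a voxel-grid velocity field.
    An incompressibility prior penalizes deviations of this from zero."""
    return (np.gradient(vx, h, axis=0)
            + np.gradient(vy, h, axis=1)
            + np.gradient(vz, h, axis=2))

# A rigid in-plane rotation is divergence-free, so it incurs no penalty:
x, y, z = np.meshgrid(np.arange(8.0), np.arange(8.0), np.arange(8.0),
                      indexing="ij")
vx, vy, vz = y, -x, np.zeros_like(x)
```

In a joint energy, a term like ||div v||² couples neighbouring voxels of the dense motion field, which is exactly the kind of physical constraint the sparse particle tracks alone cannot express.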